codeintegrity-ai
ModernBERT PromptGuard is a high-performance binary classifier specifically designed to detect malicious prompts in large language model applications, including prompt injection and jailbreak attacks.
leolee99
InjecGuard is a protective model against prompt injection attacks for large language models (LLMs), capable of effectively identifying and defending against malicious instructions while reducing over-defense issues.
proventra
A prompt injection detection model fine-tuned based on microsoft/mdeberta-v3-base, trained with multiple datasets to identify malicious prompt injection attacks.
dcarpintero
A lightweight model based on ModernBERT, focused on identifying malicious prompt injection attacks and providing AI security protection.
A lightweight model based on ModernBERT (large model edition), specifically designed to identify malicious prompts (i.e., prompt injection attacks).
skshreyas714
Prompt Guard is a text classification model designed to detect prompt attacks, capable of identifying malicious prompt injections and jailbreak attempts.
GenTelLab
GenTel-Shield is a model focused on detecting and defending against prompt injection attacks, effectively distinguishing malicious samples from benign ones.
meta-llama
PromptGuard is a text classification model designed to detect and protect against LLM prompt attacks, capable of identifying malicious prompt injections and jailbreak attempts.